The severity of stages estimation during hemorrhage using error correcting output codes method
As a beneficial component with critical impact, computer-aided decision making systems have spread into many fields, such as economics, medicine, architecture, and agriculture. Their latent capability to facilitate human work propels the rapid development of such systems. Effective decisions provided by such systems greatly reduce the expense of labor, energy, budget, etc. The computer-aided decision making system for traumatic injuries is one such system; it supplies suggestive opinions when dealing with injuries resulting from accidents, battle, or illness. Its functions may involve judging the type of illness, triaging the wounded according to battle injuries, deciding the severity of symptoms of an illness or injury, managing resources in the context of traumatic events, etc. The proposed computer-aided decision making system aims at estimating the severity of blood volume loss. Specifically, severe hemorrhage, which accompanies many traumatic injuries, is a potentially life-threatening condition that requires immediate treatment: a significant, ongoing loss of blood volume that results in decreased blood and oxygen perfusion of vital organs. Hemorrhage and blood loss can occur at different levels, such as mild, moderate, or severe. Our proposed system will assist physicians by estimating information such as the severity of blood volume loss and hemorrhage, so that timely measures can be taken not only to save lives but also to reduce the long-term complications and the cost caused by mismatched operations and treatments. The general framework of the proposed research contains three tasks, and many novel and transformative concepts are integrated into the system. First is the preprocessing of the raw signals. In this stage, adaptive filtering is adopted and customized to filter noise, and two detection algorithms (QRS complex detection and systolic/diastolic wave detection) are designed. The second task is feature extraction.
The proposed system combines features from the time domain, the frequency domain, nonlinear analysis, and multi-model analysis to better represent the patterns that appear when hemorrhage happens. Third, a machine learning algorithm is designed for classification of these patterns. A novel machine learning algorithm, a new version of error correcting output codes (ECOC), is designed and investigated for high accuracy and real-time decision making. The features and characteristics of this machine learning method are essential for the proposed computer-aided trauma decision making system. The proposed system is tested against the Lower Body Negative Pressure (LBNP) dataset, and the results indicate the accuracy and reliability of the proposed system.
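The ECOC decoding step can be sketched as follows. The codewords, the five binary sub-problems, and the example classifier outputs below are all hypothetical; the paper designs its own ECOC variant and binary learners, so this is only an illustration of the general technique.

```python
# Sketch of error-correcting output codes (ECOC) decoding for three
# hemorrhage severity classes. Codewords and bit layout are hypothetical.

# Each class is assigned a binary codeword; each column corresponds to one
# binary classification sub-problem trained on the extracted features.
CODEBOOK = {
    "mild":     (0, 0, 1, 1, 0),
    "moderate": (0, 1, 0, 1, 1),
    "severe":   (1, 0, 0, 0, 1),
}

def hamming(a, b):
    """Number of positions where two codewords disagree."""
    return sum(x != y for x, y in zip(a, b))

def ecoc_decode(bit_predictions):
    """Assign the class whose codeword is nearest (in Hamming distance) to
    the vector of binary classifier outputs. A few flipped bits are
    tolerated, which is the error-correcting property."""
    return min(CODEBOOK, key=lambda c: hamming(CODEBOOK[c], bit_predictions))

# Classifiers output (1, 0, 0, 1, 1): one bit flipped from the "severe"
# codeword, but decoding still recovers the right class.
print(ecoc_decode((1, 0, 0, 1, 1)))  # -> severe
```

Because the minimum pairwise Hamming distance between codewords here is 3, any single flipped bit is corrected, which is what makes ECOC attractive for noisy physiological features.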
Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach to Highly-Accurate Representation of Undirected Weighted Networks
An Undirected Weighted Network (UWN) is commonly found in big data-related
applications. The information of such a network, connected with its nodes
and edges, can be expressed as a Symmetric, High-Dimensional and Incomplete
(SHDI) matrix. However, existing models fail to model either its intrinsic
symmetry or its low data density, resulting in low model scalability or
low representation learning ability. To address this issue, a Proximal
Symmetric Nonnegative Latent-factor-analysis (PSNL) model is proposed. It
incorporates a proximal term into a symmetry-aware and data density-oriented
objective function for high representation accuracy. An adaptive
Alternating Direction Method of Multipliers (ADMM)-based learning scheme is
then implemented through a Tree-structured Parzen Estimator (TPE) method for
high computational efficiency. Empirical studies on four UWNs demonstrate that
PSNL achieves higher accuracy than state-of-the-art models, as well as
highly competitive computational efficiency.
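The core modeling idea can be sketched with a tiny symmetric nonnegative factorization: a node's latent vector is shared on both sides of the product, so symmetry is built in, and a proximal term keeps each update close to the previous iterate. The projected-gradient scheme below is an illustrative assumption; the paper's actual learning scheme is an adaptive ADMM with TPE-tuned hyperparameters.

```python
# Sketch: symmetric nonnegative latent factor analysis with a proximal term,
# trained by projected gradient on observed entries only (SHDI setting).
import copy

def psnl_step(obs, X, lr=0.05, rho=0.1):
    """One proximal projected-gradient pass.
    obs: dict {(i, j): value} of observed entries of a symmetric matrix.
    X:   list of latent vectors, one per node; the model is A ~ X X^T.
    rho: proximal weight keeping the new X close to the previous pass."""
    X_prev = copy.deepcopy(X)
    for (i, j), a in obs.items():
        pred = sum(X[i][k] * X[j][k] for k in range(len(X[i])))
        err = a - pred
        for k in range(len(X[i])):
            gi = -2 * err * X_prev[j][k] + 2 * rho * (X[i][k] - X_prev[i][k])
            gj = -2 * err * X_prev[i][k] + 2 * rho * (X[j][k] - X_prev[j][k])
            X[i][k] = max(0.0, X[i][k] - lr * gi)  # nonnegativity projection
            X[j][k] = max(0.0, X[j][k] - lr * gj)
    return X

# Tiny undirected weighted network; entry (0, 2) is unobserved.
obs = {(0, 1): 1.0, (1, 2): 1.0, (0, 0): 1.0, (1, 1): 1.0, (2, 2): 1.0}
X = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]
for _ in range(200):
    X = psnl_step(obs, X)
pred01 = sum(X[0][k] * X[1][k] for k in range(2))  # should approach 1.0
```

Sharing X on both sides of the product is what gives "highly-accurate representation of undirected weighted networks" its symmetry guarantee: the reconstruction X Xᵀ is symmetric by construction, rather than approximately symmetric after training.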
A Dynamic Linear Bias Incorporation Scheme for Nonnegative Latent Factor Analysis
High-Dimensional and Incomplete (HDI) data is commonly encountered in big
data-related applications like social network services systems, which
concern the limited interactions among numerous nodes. Knowledge acquisition
from HDI data is a vital issue in the domain of data science owing to its
rich embedded patterns, like node behaviors, where the fundamental task is to
perform HDI data representation learning. Nonnegative Latent Factor Analysis
(NLFA) models have proven superior at addressing this issue, where a linear
bias incorporation (LBI) scheme is important in preventing training
overshooting and fluctuation, as well as preventing the model from
premature convergence. However, existing LBI schemes are all static ones
whose linear biases are fixed, which significantly restricts the
scalability of the resultant NLFA model and results in a loss of representation
learning ability on HDI data. Motivated by the above discoveries, this paper
presents a novel dynamic linear bias incorporation (DLBI) scheme. It
first extends the linear bias vectors into matrices, and then builds a binary
weight matrix to switch the active/inactive states of the linear biases. Each
entry of the weight matrix switches between the binary states dynamically
according to the variation of the corresponding linear bias value, thereby
establishing dynamic linear biases for an NLFA model. Empirical studies on
three HDI datasets from real applications demonstrate that the proposed
DLBI-based NLFA model obtains higher representation accuracy than
state-of-the-art models do, as well as highly competitive computational
efficiency. Comment: arXiv admin note: substantial text overlap with
arXiv:2306.03911, arXiv:2302.12122, arXiv:2306.0364
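The bias-matrix-plus-gate mechanism can be sketched as below. The specific switching rule (deactivate a bias once its magnitude falls under a threshold) and all parameter values are assumptions for illustration; the paper defines its own dynamic rule tied to the variation of the bias values.

```python
# Illustrative sketch of a dynamic linear bias in the spirit of DLBI:
# biases live in a matrix B, and a binary weight matrix W turns each bias
# on (1) or off (0) during training. Threshold rule is hypothetical.

THRESH = 1e-3  # hypothetical activation threshold

def predict(u, i, P, Q, B, W):
    """Estimate entry (u, i): latent inner product plus the gated bias."""
    dot = sum(P[u][k] * Q[i][k] for k in range(len(P[u])))
    return dot + W[u][i] * B[u][i]

def update_gate(B, W):
    """Switch each bias between active/inactive states by its magnitude."""
    for u in range(len(B)):
        for i in range(len(B[u])):
            W[u][i] = 1 if abs(B[u][i]) > THRESH else 0
    return W

# One user, one item, rank-2 latent factors.
P, Q = [[0.5, 0.5]], [[1.0, 1.0]]
B, W = [[0.2]], [[1]]
r_active = predict(0, 0, P, Q, B, W)    # 1.0 + 0.2 = 1.2

B[0][0] = 1e-5        # bias has decayed toward zero during training
W = update_gate(B, W) # gate flips to 0, so the bias drops out dynamically
r_inactive = predict(0, 0, P, Q, B, W)  # back to the pure inner product
```

The contrast with a static LBI scheme is exactly the `update_gate` call: a static scheme would keep `W[u][i] = 1` forever, while here each bias can be switched off when it stops helping.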
Experimental demonstration of picometer level signal extraction with time-delay interferometry technique
In this work, we built an experimental setup to simulate clock noise
transmission with two spacecraft and two optical links, and further
demonstrated the extraction of a picometer-level signal buried in large
laser frequency noise and clock noise using a data post-processing method.
Laser frequency noise is almost eliminated by using the idea of time-delay
interferometry (TDI) to construct an equal-arm interferometer. Clock
asynchronism and clock jitter noise are significantly suppressed by
transmitting the clock noise on a laser sideband generated with an
electro-optic modulator (EOM). Experimental results show a reduction in laser
frequency noise by a factor of approximately 10^5 and in clock noise by a
factor of 10^2, recovering a weak displacement signal with an average
amplitude of about 60 picometers and a period of 1 second. This work achieves
a proof-of-principle verification of the noise reduction capability of the
TDI technique, serving the data processing research of space-borne
gravitational wave detection.
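The laser-noise cancellation that TDI performs can be demonstrated with a toy discrete-time model: with unequal arms, re-combining the two round-trip measurements with extra delays makes the common laser noise cancel identically. Delays are in samples, and this is a sketch of the textbook first-generation Michelson combination, not the experimental signal chain.

```python
# Toy illustration of time-delay interferometry (TDI) noise cancellation.
import random

N, d1, d2 = 2000, 7, 11                        # round-trip delays in samples
p = [random.gauss(0.0, 1.0) for _ in range(N)]  # laser frequency noise

def eta(t, d):
    """Round-trip measurement on an arm with delay d: the delayed laser
    noise beats against the local (undelayed) laser noise."""
    return p[t - d] - p[t]

# First-generation Michelson TDI combination:
#   X(t) = [eta1(t) - eta1(t - d2)] - [eta2(t) - eta2(t - d1)]
# Expanding in p shows every term cancels, whatever d1 and d2 are.
start = d1 + d2
X = [(eta(t, d1) - eta(t - d2, d1)) - (eta(t, d2) - eta(t - d1, d2))
     for t in range(start, N)]
# max(abs(x) for x in X) is at machine-precision level: the laser noise
# is removed exactly, leaving room to see a weak displacement signal.
```

This is the "equal arm interferometer" idea from the abstract: the extra delays synthesize two optical paths of identical total length, so the laser frequency noise enters both with the same history and subtracts out.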
A Hierarchical Method for Removal of Baseline Drift from Biomedical Signals: Application in ECG Analysis
Noise can compromise the extraction of fundamental and important features from biomedical signals and hence prevent accurate analysis of these signals. Baseline wander in electrocardiogram (ECG) signals is one such example; it can be caused by factors such as respiration, variations in electrode impedance, and excessive body movements. Unless baseline wander is effectively removed, the accuracy of any feature extracted from the ECG, such as the timing and duration of the ST-segment, is compromised. This paper approaches the filtering task from a novel standpoint by assuming that the ECG baseline wander comes from an independent and unknown source. The technique utilizes a hierarchical method including a blind source separation (BSS) step, in particular independent component analysis, to eliminate the effect of the baseline wander. We examine the specifics of the components causing the baseline wander and the factors that affect the separation process. Experimental results reveal the superiority of the proposed algorithm in removing the baseline wander.
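For context on what "removing baseline wander" means in practice, the classic simple alternative to the paper's BSS/ICA approach is to estimate the slow baseline with a moving average and subtract it. The sketch below uses that simpler technique on synthetic data purely as an illustration; it is not the hierarchical ICA method the paper proposes, and the window length and signal parameters are assumptions.

```python
# Baseline wander removal by moving-average detrending (illustrative only;
# the paper instead separates the drift with independent component analysis).
import math

def remove_baseline(signal, win):
    """Estimate the baseline as a centered moving average over `win` samples
    (win should span several heartbeats) and subtract it."""
    half = win // 2
    out = []
    for t in range(len(signal)):
        lo, hi = max(0, t - half), min(len(signal), t + half + 1)
        baseline = sum(signal[lo:hi]) / (hi - lo)
        out.append(signal[t] - baseline)
    return out

# Synthetic example: a fast "ECG-like" oscillation riding on a slow drift.
fs = 250  # samples per second
drift = [0.5 * math.sin(2 * math.pi * 0.2 * t / fs) for t in range(1000)]
ecg = [math.sin(2 * math.pi * 5 * t / fs) for t in range(1000)]
noisy = [d + e for d, e in zip(drift, ecg)]
clean = remove_baseline(noisy, win=101)  # recovers ecg away from the edges
```

The motivation for the paper's BSS approach is visible even here: a fixed window must be tuned against heart rate and drift frequency, whereas treating the drift as an independent source avoids committing to a cutoff.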
Unsupervised Adaptation from Repeated Traversals for Autonomous Driving
For a self-driving car to operate reliably, its perceptual system must
generalize to the end-user's environment -- ideally without additional
annotation efforts. One potential solution is to leverage unlabeled data (e.g.,
unlabeled LiDAR point clouds) collected from the end-users' environments (i.e.
target domain) to adapt the system to the difference between training and
testing environments. While extensive research has been done on such an
unsupervised domain adaptation problem, one fundamental problem lingers: there
is no reliable signal in the target domain to supervise the adaptation process.
To overcome this issue, we observe that it is easy to collect unsupervised data
from multiple traversals of repeated routes. While different from conventional
unsupervised domain adaptation, this assumption is extremely realistic since
many drivers share the same roads. We show that this simple additional
assumption is sufficient to obtain a potent signal that allows us to perform
iterative self-training of 3D object detectors on the target domain.
Concretely, we generate pseudo-labels with the out-of-domain detector but
reduce false positives by removing detections of supposedly mobile objects that
are persistent across traversals. Further, we reduce false negatives by
encouraging predictions in regions that are not persistent. We experiment with
our approach on two large-scale driving datasets and show remarkable
improvement in 3D object detection of cars, pedestrians, and cyclists, bringing
us a step closer to generalizable autonomous driving. Comment: Accepted by NeurIPS 2022. Code is available at https://github.com/YurongYou/Rote-D
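The persistence filtering of pseudo-labels can be sketched as follows: a detection of a supposedly mobile object (car, pedestrian) that reappears at the same location in many traversals is probably a static false positive and is removed before self-training. Representing detections as 2D centers, and the grouping radius and vote threshold, are hypothetical simplifications of the paper's actual pipeline.

```python
# Sketch: remove pseudo-label detections that are persistent across
# repeated traversals of the same route (likely static false positives).

def filter_pseudo_labels(detections_per_traversal, radius=1.0, max_hits=2):
    """detections_per_traversal: list (one entry per traversal) of lists of
    (x, y) detection centers in a common map frame. A detection from the
    first traversal is kept only if fewer than `max_hits` other traversals
    also contain a detection within `radius` of it, i.e., it is NOT
    persistent across traversals."""
    ref = detections_per_traversal[0]
    kept = []
    for (x, y) in ref:
        hits = sum(
            any((x - a) ** 2 + (y - b) ** 2 <= radius ** 2 for (a, b) in dets)
            for dets in detections_per_traversal[1:]
        )
        if hits < max_hits:
            kept.append((x, y))
    return kept

# A static false positive at (5, 5) recurs in every traversal and is
# dropped; the moving car seen only once at (20, 3) survives.
traversals = [
    [(5.0, 5.0), (20.0, 3.0)],
    [(5.2, 4.9)],
    [(4.8, 5.1)],
]
kept = filter_pseudo_labels(traversals)  # [(20.0, 3.0)]
```

The complementary false-negative step described in the abstract would work in the opposite direction, upweighting predictions in regions that are not persistent across traversals.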
Simplified HIV Testing and Treatment in China: Analysis of Mortality Rates Before and After a Structural Intervention.
Background: Multistage stepwise HIV testing and treatment initiation procedures can result in lost opportunities to provide timely antiretroviral therapy (ART). Incomplete patient engagement along the continuum of HIV care translates into high levels of preventable mortality. We aimed to evaluate the ability of a simplified test and treat structural intervention to reduce mortality.
Methods and findings: In the "pre-intervention 2010" (from January 2010 to December 2010) and "pre-intervention 2011" (from January 2011 to December 2011) phases, patients who screened HIV-positive at health care facilities in Zhongshan and Pubei counties in Guangxi, China, followed the standard-of-care process. In the "post-intervention 2012" (from July 2012 to June 2013) and "post-intervention 2013" (from July 2013 to June 2014) phases, patients who screened HIV-positive at the same facilities were offered a simplified test and treat intervention, i.e., concurrent HIV confirmatory and CD4 testing and immediate initiation of ART, irrespective of CD4 count. Participants were followed for 6-18 mo until the end of their study phase period. Mortality rates in the pre-intervention and post-intervention phases were compared for all HIV cases and for treatment-eligible HIV cases. A total of 1,034 HIV-positive participants (281 and 339 in the two pre-intervention phases respectively, and 215 and 199 in the two post-intervention phases respectively) were enrolled. Following the structural intervention, receipt of baseline CD4 testing within 30 d of HIV confirmation increased from 67%/61% (pre-intervention 2010/pre-intervention 2011) to 98%/97% (post-intervention 2012/post-intervention 2013) (all p < 0.001 [i.e., for all comparisons between a pre- and post-intervention phase]), and the time from HIV confirmation to ART initiation decreased from 53 d (interquartile range [IQR] 27-141)/43 d (IQR 15-113) to 5 d (IQR 2-12)/5 d (IQR 2-13) (all p < 0.001). Initiation of ART increased from 27%/49% to 91%/89% among all cases (all p < 0.001) and from 39%/62% to 94%/90% among individuals with CD4 count ≤ 350 cells/mm3 or AIDS (all p < 0.001). Mortality decreased from 27%/27% to 10%/10% for all cases (all p < 0.001) and from 40%/35% to 13%/13% for cases with CD4 count ≤ 350 cells/mm3 or AIDS (all p < 0.001). The simplified test and treat intervention was significantly associated with decreased mortality rates compared to pre-intervention 2011 (adjusted hazard ratio [aHR] 0.385 [95% CI 0.239-0.620] and 0.380 [95% CI 0.233-0.618] for the two post-intervention phases, respectively, for all newly diagnosed HIV cases [both p < 0.001], and aHR 0.369 [95% CI 0.226-0.603] and 0.361 [95% CI 0.221-0.590] for newly diagnosed treatment-eligible HIV cases [both p < 0.001]). The unit cost of an additional patient receiving ART attributable to the intervention was US$234.52.
Conclusions: Our results demonstrate that the simplified HIV test and treat intervention promoted successful engagement in care and was associated with a 62% reduction in mortality. Our findings support the implementation of integrated HIV testing and immediate access to ART irrespective of CD4 count, in order to optimize the impact of ART.
High-performance non-Fermi-liquid metallic thermoelectric materials
Searching for high-performance thermoelectric (TE) materials in the paradigm
of narrow-bandgap semiconductors has lasted for nearly 70 years and has now
reached a research bottleneck. Here we report on the discovery of a few
metallic compounds, TiFexCu2x-1Sb and TiFe1.33Sb, showing thermopower
exceeding that of many TE semiconductors and dimensionless figures of merit
comparable with the state-of-the-art TE materials. A quasi-linear
temperature (T) dependence of electrical resistivity from 2 K to 700 K and a
logarithmic T-dependence of the electronic specific heat at low temperature
are also observed to coexist with the high thermopower, highlighting the
strong intercoupling of the non-Fermi-liquid (NFL) quantum critical behavior
of electrons with TE transport. Electronic structure analysis reveals the
existence of fluctuating Fe-eg-related local magnetic moments, Fe-Fe
antiferromagnetic (AFM) interaction at the nearest 4c-4d sites, and two-fold
degenerate eg orbitals antiferromagnetically coupled with the dual-type
itinerant electrons close to the Fermi level, all of which point to a
competition between AFM ordering and Kondo-like spin compensation, as well
as a parallel two-channel Kondo effect. These effects are both strongly
mediated by the structural disorder due to the random filling of Fe/Cu at
the equivalent 4c/4d sites of the Heusler crystal lattice. The magnetic
susceptibility deviates from ideal antiferromagnetism but can be fitted well
by \chi(T) = 1/(\theta + BT^{\alpha}), seemingly consistent with the quantum
critical scenario of strong local correlation discussed before. Our work not
only breaks the dilemma that promising TE materials must be heavily doped
semiconductors, but also demonstrates the correlation among high TE
performance, NFL quantum criticality, and magnetic fluctuation, which opens
up new directions for future research. Comment: 19 pages with 6 figures
Pre-Training LiDAR-Based 3D Object Detectors Through Colorization
Accurate 3D object detection and understanding for self-driving cars heavily
relies on LiDAR point clouds, necessitating large amounts of labeled data to
train. In this work, we introduce an innovative pre-training approach, Grounded
Point Colorization (GPC), to bridge the gap between data and labels by teaching
the model to colorize LiDAR point clouds, equipping it with valuable semantic
cues. To tackle challenges arising from color variations and selection bias, we
incorporate color as "context" by providing ground-truth colors as hints during
colorization. Experimental results on the KITTI and Waymo datasets demonstrate
GPC's remarkable effectiveness. Even with limited labeled data, GPC
significantly improves fine-tuning performance; notably, on just 20% of the
KITTI dataset, GPC outperforms training from scratch with the entire dataset.
In sum, we introduce a fresh perspective on pre-training for 3D object
detection, aligning the objective with the model's intended role and ultimately
advancing the accuracy and efficiency of 3D object detection for autonomous
vehicles.